
    Using Negotiation to Reduce Redundant Autonomous Mobile Program Movements

    Distributed load managers exhibit thrashing, where tasks are repeatedly moved between locations due to incomplete global load information. This paper shows that systems of Autonomous Mobile Programs (AMPs) exhibit the same behaviour, identifying two types of redundant movement and terming them greedy effects. AMPs are unusual in that, in place of some external load management system, each AMP periodically recalculates network and program parameters and may independently move to a better execution environment. Load management emerges from the behaviour of collections of AMPs. The paper explores the extent of greedy effects by simulation, and then proposes negotiating AMPs (NAMPs) to ameliorate the problem. We present the design of AMPs with a competitive negotiation scheme (cNAMPs), and compare their performance with AMPs by simulation.

    Reliable scalable symbolic computation: The design of SymGridPar2

    Symbolic computation is an important area of both Mathematics and Computer Science, with many large computations that would benefit from parallel execution. Symbolic computations are, however, challenging to parallelise as they have complex data and control structures, and both dynamic and highly irregular parallelism. The SymGridPar framework (SGP) has been developed to address these challenges on small-scale parallel architectures. However, the multicore revolution means that the number of cores and the number of failures are growing exponentially, and that the communication topology is becoming increasingly complex. Hence an improved parallel symbolic computation framework is required. This paper presents the design and initial evaluation of SymGridPar2 (SGP2), a successor to SymGridPar that is designed to provide scalability onto 10^5 cores, and hence also to provide fault tolerance. We present the SGP2 design goals, principles and architecture. We describe how scalability is achieved using layering and by allowing the programmer to control task placement. We outline how fault tolerance is provided by supervising remote computations, and outline higher-level fault tolerance abstractions. We describe the SGP2 implementation status and development plans. We report the scalability and efficiency, including weak scaling to about 32,000 cores, and investigate the overheads of tolerating faults for simple symbolic computations.
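    The supervision idea above can be sketched in Haskell. This is an illustrative model only, not the SGP2 API: a supervisor runs a computation and, if it fails, retries it up to a bound (SGP2 would instead reschedule the task on another node).

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, try)

-- Illustrative sketch of supervising a computation (not the SGP2 API):
-- run the task; on failure, retry up to a bounded number of times.
supervise :: Int -> IO a -> IO (Either String a)
supervise 0 _    = return (Left "task failed after all retries")
supervise n task = do
  r <- try task
  case r of
    Right v                   -> return (Right v)
    Left (_ :: SomeException) -> supervise (n - 1) task
```

    In SGP2 terms, the `Left` branch corresponds to detecting a failed remote computation and re-placing it; the bound models giving up when no replica succeeds.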

    A functional database

    This thesis explores the use of functional languages to implement, manipulate and query databases. Implementing databases. A functional language is used to construct a database manager that allows efficient and concurrent access to shared data. In contrast to the locking mechanism found in conventional databases, the functional database uses data dependency to provide exclusion. Results obtained from a prototype database demonstrate that data dependency permits an unusual degree of concurrency between operations on the data. The prototype database is used to exhibit some problems that seriously restrict concurrency, and also to demonstrate the resolution of these problems using a new primitive. The design of a more realistic database is outlined. Some restrictions on the data structures that can be used in a functional database are also uncovered. Manipulating databases. Functions over the database are shown to provide a powerful manipulation language, and how to make such functions atomic is described. Such atomic transaction-functions permit consistent concurrent transformations of a database. Some issues in the transaction model are also addressed, including nested transactions. Querying databases. Others have recommended list comprehensions, a construct found in some functional languages, as a query notation. Comprehensions are clear, concise, powerful, mathematically tractable and well integrated with a functional manipulation language. In this thesis, comprehensions are proved to be adequately powerful, or relationally complete. Database and programming language theories are further integrated by describing the relational calculus in a programming language semantics. Finally, the mathematical tractability of the notation is used to improve the efficiency of list comprehension queries. For each major conventional improvement an analogous comprehension transformation is given.
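    The query notation mentioned above can be illustrated with a small Haskell sketch; the relation and field names here are invented for the example, not taken from the thesis.

```haskell
-- Hypothetical employee relation; names and fields are illustrative only.
data Emp = Emp { name :: String, dept :: String, salary :: Int }

emps :: [Emp]
emps = [ Emp "ann" "cs" 3000, Emp "bob" "math" 2500, Emp "cat" "cs" 2000 ]

-- A list comprehension expressing relational selection and projection:
-- the names of all employees in the "cs" department.
csNames :: [String]
csNames = [ name e | e <- emps, dept e == "cs" ]
```

    The generator supplies the relation, the guard is the selection predicate, and the head expression is the projection, which is why comprehensions map so directly onto the relational calculus.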

    SymGrid-Par: a standard skeleton-based framework for computational algebra systems

    The SymGrid-Par framework is being developed as part of the European FP6 SCIEnce project (I3-026133) to provide a standard skeleton-based framework for parallelising large computational algebra problems in the Maple, GAP, Kant and Mupad systems. The computational algebra community uses a number of domain-specific high-level languages, each with specific capabilities; for example, GAP specialises in computations over groups. The community are keen to develop standards, to improve interoperability between computer algebra systems (CAS), and to avoid duplicating implementation effort. Algebraic computations are challenging to parallelise as they are symbolic rather than numeric, and hence require a relatively rich set of data structures. Parallel tasks are often generated dynamically, and are of highly irregular size, e.g. varying in size by 5 orders of magnitude. SymGrid-Par orchestrates sequential computational algebra (CA) components into a parallel, and possibly grid-enabled, application. It provides a native skeleton-based interface to the CA programmer, so for example a GAP programmer might invoke a parallel GAP map function. There are both generic skeletons like map and reduce, and domain-specific skeletons like orbit and transitive closure. The skeletons are implemented by a coordination server that distributes the work to multiple instances of the sequential CAS on multiple processors; dynamically manages load distribution; and reassembles the results for return to the invoking CAS. The coordination server exploits the dynamic parallelism and load management capabilities of the Eden and GpH parallel Haskells. Invocations between SymGrid-Par components use our new standardised SCSCP protocol, currently supported by 7 CAS, and mathematical objects are represented in the standard XML-based OpenMath format. The generic SymGrid-Par framework delivers performance comparable with, and typically better than, a specialised parallel CAS implementation like ParGAP. Many CA problems have large task granularity, and hence SymGrid-Par gives good performance on both cluster and multicore architectures. Moreover, the standardised interface can be exploited to orchestrate multiple CAS to solve problems that cannot be solved in a single CAS.
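    The shape of such a map skeleton can be sketched in Haskell. For self-containedness this sketch uses base-library concurrency rather than GpH/Eden, and the task is a toy stand-in for a call out to a sequential CAS instance; it models the skeleton's shape, not SymGrid-Par's actual interface.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Toy stand-in for a sequential computational-algebra task of irregular size.
caTask :: Integer -> Integer
caTask n = product [1 .. n]

-- Sketch of a map skeleton: fork each task, then collect results in order.
mapSkeleton :: (a -> b) -> [a] -> IO [b]
mapSkeleton f xs = do
  vars <- mapM (\x -> do v <- newEmptyMVar
                         _ <- forkIO (putMVar v $! f x)
                         return v) xs
  mapM takeMVar vars
```

    A real coordination server would additionally balance the irregularly-sized tasks across CAS instances and processors, which is what Eden and GpH provide.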

    Tuning task granularity and data locality of data parallel GpH programs


    Performance comparison of OpenMP and MPI for a concordance benchmark


    A Strategic Profiler for Glasgow Parallel Haskell


    A performance comparison of MDSDV with AODV and DSDV routing protocols

    We present a systematic comparative evaluation of a new multipath routing protocol for MANETs. The new protocol, called Multipath Destination Sequenced Distance Vector (MDSDV), is compared with two well-known protocols, DSDV and AODV. MDSDV finds disjoint paths, i.e. paths with no nodes in common, between a source and a destination, and we outline some adaptations of MDSDV over previous work. We evaluate the protocols on a range of MANETs with between 10 and 80 nodes that are either static or highly dynamic, and with slow, medium or fast node speeds. The protocol comparison metrics are Packet Delivery Fraction (PDF), end-to-end delay, and data dropped.
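    The first two comparison metrics can be made concrete with a small sketch; the definitions are the standard ones, but the function names are ours, not the paper's.

```haskell
-- Packet Delivery Fraction: data packets delivered over data packets sent.
packetDeliveryFraction :: Int -> Int -> Double
packetDeliveryFraction delivered sent =
  fromIntegral delivered / fromIntegral sent

-- Mean end-to-end delay over the per-packet delays (e.g. in seconds).
meanEndToEndDelay :: [Double] -> Double
meanEndToEndDelay ds = sum ds / fromIntegral (length ds)
```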